REJOINDER: THE DANTZIG SELECTOR: STATISTICAL ESTIMATION WHEN p IS MUCH LARGER THAN n

Authors

  • Emmanuel Candès
  • Terence Tao
Abstract

First of all, we would like to thank all the discussants for their interest and comments, as well as for their thorough investigation. The comments all underline the importance and timeliness of the topics discussed in our paper, namely, accurate statistical estimation in high dimensions. We would also like to thank the editors for this opportunity to comment briefly on a few issues raised in the discussions. Of special interest is the diversity of perspectives, which include theoretical, practical and computational issues. With this being said, two main points recur throughout the discussions:

1. Is it possible to extend and refine our theoretical results, and how do they compare against the very recent literature?
2. How does the Dantzig Selector (DS) compare with the Lasso?

We will address these issues in this rejoinder, but before we begin, we would like to restate as simply as possible the main point of our paper and put this work in a broader context, so as to avoid confusion about our point of view and motivations.

1. Our background. We assume a linear regression model

    y = Xβ + z,   (1)

where y is an n-dimensional vector of observations, X is an n × p design matrix and z is an n-dimensional noise vector which we take to be i.i.d. N(0, σ²) for simplicity. We are interested in estimating the parameter vector β in the situation where the number p of variables is greater than the number n of observations. Under certain conditions on the design matrix X which roughly guarantee that the model is identifiable, the main message of the paper is as follows:
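The estimator discussed throughout, the Dantzig selector, minimizes the ℓ1 norm of the coefficients subject to a constraint on the correlation between the residual and the columns of X; it can be solved as a linear program. The sketch below, assuming SciPy is available, rewrites β = u − v with u, v ≥ 0 in the standard way; the function name, the synthetic data, and the choice of threshold are illustrative, not taken from the authors' own code.

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, lam):
    """Solve  min ||b||_1  s.t.  ||X^T (y - X b)||_inf <= lam  as an LP."""
    n, p = X.shape
    G = X.T @ X                      # p x p Gram matrix
    Xty = X.T @ y
    c = np.ones(2 * p)               # objective: sum(u) + sum(v) = ||b||_1
    # Split  -lam <= X^T y - G(u - v) <= lam  into two one-sided constraints.
    A_ub = np.block([[-G, G], [G, -G]])
    b_ub = np.concatenate([lam - Xty, lam + Xty])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    uv = res.x
    return uv[:p] - uv[p:], res

# Small synthetic instance of model (1) with p > n and a sparse beta.
rng = np.random.default_rng(0)
n, p, sigma = 50, 100, 0.1
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta = np.zeros(p)
beta[[3, 17, 42]] = [2.0, -1.5, 1.0]
y = X @ beta + sigma * rng.standard_normal(n)
lam = sigma * np.sqrt(2 * np.log(p))   # threshold level of order sigma*sqrt(2 log p)
beta_hat, res = dantzig_selector(X, y, lam)
```

With a well-conditioned design like this one, the largest entries of `beta_hat` sit on the true support; the constraint forces the residual to be nearly uncorrelated with every column of X, which is what distinguishes the DS from the Lasso's penalized least-squares formulation.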


Similar articles


DISCUSSION: THE DANTZIG SELECTOR: STATISTICAL ESTIMATION WHEN p IS MUCH LARGER THAN n

given just a single parameter t. Two active-set methods were described in [11], with some concern about efficiency if p were large, where X is n × p. Later when basis pursuit de-noising (BPDN) was introduced [2], the intention was to deal with p very large and to allow X to be a sparse matrix or a fast operator. A primal–dual interior method was used to solve the associated quadratic program, ...

Full text

DISCUSSION: THE DANTZIG SELECTOR: STATISTICAL ESTIMATION WHEN p IS MUCH LARGER THAN n

1. Introduction. This is a fascinating paper on an important topic: the choice of predictor variables in large-scale linear models. A previous paper in these pages attacked the same problem using the "LARS" algorithm (Efron, Hastie, Johnstone and Tibshirani [3]); actually three algorithms including the Lasso as middle case. There are tantalizing similarities between the Dantzig Selector (DS) ...

Full text

THE DANTZIG SELECTOR: STATISTICAL ESTIMATION WHEN p IS MUCH LARGER THAN n

s n log p, where s is the dimension of the sparsest model. These are, respectively, the conditions of this paper using the Dantzig selector and those of Bunea, Tsybakov and Wegkamp [2] and Meinshausen and Yu [9] using the Lasso. Strictly speaking, Bunea, Tsybakov and Wegkamp consider only prediction, not l2 loss, but in a paper in preparation with Ritov and Tsybakov we show that the spirit of t...

Full text


Journal title:

Volume   Issue

Pages  -

Publication date: 2008